Neural Networks with Superexpressive Activations and Integer Weights

Authors

Abstract

An example of an activation function $$\sigma $$ is given such that networks with activations $$\{\sigma , \lfloor \cdot \rfloor \}$$, integer weights and a fixed architecture depending only on the input dimension d approximate continuous functions on $$[0,1]^d$$. The range of integer weights required for $$\varepsilon $$-approximation of $$\beta $$-Hölder continuous functions is derived, which, together with our discrete choice of weights, allows us to obtain the number of networks needed to attain a given approximation rate. Combining this with the obtained speed of approximation and applying an oracle inequality, we get a prediction rate $$n^{\frac{-2\beta }{2\beta +d}}\log _2n$$ for neural network regression estimation of an unknown $$\beta $$-Hölder continuous function from n samples. Thus, up to a logarithmic factor $$\log _2n$$, the attained rate coincides with the minimax estimation error for $$\beta $$-smooth functions. As the network sizes and their weights are integers, the constructed networks are not only easily encodable but they also reduce the problem of finding the best predictor to a simple procedure of minimization over a finite set of candidates.
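Because the construction is fully discrete, model selection reduces to enumeration. The sketch below illustrates that idea only: a toy one-hidden-layer network with integer weights and a floor activation, fitted by brute-force empirical risk minimization over a finite candidate set. The activation `sigma` is a placeholder, not the paper's superexpressive activation, and the architecture and weight range are illustrative assumptions.

```python
# Minimal illustration (NOT the paper's construction): restricting a fixed
# architecture to integer weights in a bounded range turns "find the best
# predictor" into minimization over a finite set of candidates.
import itertools
import math

def floor_act(x):
    return math.floor(x)

def sigma(x):              # placeholder activation, not the paper's sigma
    return math.sin(x)

def tiny_net(x, w):
    # One floor unit and one sigma unit, combined with integer weights w.
    w1, w2, w3, w4, b = w
    return w3 * floor_act(w1 * x) + w4 * sigma(w2 * x) + b

# Regression sample from an "unknown" Hölder continuous target on [0, 1].
xs = [i / 20 for i in range(21)]
ys = [math.sqrt(x) for x in xs]

# Finite candidate set: all integer weight vectors in {-2, ..., 2}^5.
R = 2
candidates = itertools.product(range(-R, R + 1), repeat=5)

def risk(w):
    return sum((tiny_net(x, w) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

best = min(candidates, key=risk)
print("best integer weights:", best, "empirical risk:", risk(best))
```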


Similar Articles

Neural Networks with Complex Activations and Connection Weights

The concept of neural networks is generalized to include complex connections between complex units. A mathematical model is presented. An expression for the network's energy as well as a complex learning rule are proposed. This innovation may lead to new neural network paradigms, architectures, and applications, and may help to better understand biological nervous systems. The similarity bet...
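The snippet proposes an energy expression for complex-valued networks without stating its form. As a hedged illustration, the sketch below uses the standard Hopfield energy extended to complex states with Hermitian weights, E = -(1/2) Re(s^H W s), which is one common choice and not necessarily the paper's.

```python
# Illustrative complex Hopfield-style energy, assumed form E = -0.5 Re(s^H W s);
# the paper's actual energy expression may differ.
import numpy as np

rng = np.random.default_rng(1)
n = 4
W = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
W = (W + W.conj().T) / 2                          # Hermitian weights => real energy
s = np.exp(1j * rng.uniform(0, 2 * np.pi, n))     # unit-modulus complex states

def energy(s, W):
    return -0.5 * np.real(s.conj() @ W @ s)

print(energy(s, W))
```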


Quantized Neural Networks: Training Neural Networks with Low Precision Weights and Activations

We introduce a method to train Quantized Neural Networks (QNNs): neural networks with extremely low precision (e.g., 1-bit) weights and activations at run-time. At train-time the quantized weights and activations are used for computing the parameter gradients. During the forward pass, QNNs drastically reduce memory size and accesses, and replace most arithmetic operations with bit-wise operati...
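The mechanism described here, quantized values in the forward pass with gradients still flowing to latent full-precision weights, is commonly implemented with a straight-through estimator (STE). Below is a minimal PyTorch sketch of 1-bit binarization with an STE; it is a simplified stand-in for the paper's training procedure, not a reproduction of it.

```python
# Sketch of QNN-style 1-bit weight binarization with a straight-through
# estimator; a simplification, assuming PyTorch is available.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)                     # binary values in {-1, +1}

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # STE: pass the gradient through, but zero it where |x| > 1.
        return grad_out * (x.abs() <= 1).float()

w_real = torch.randn(4, 3, requires_grad=True)   # latent full-precision weights
x = torch.randn(2, 3)
w_bin = BinarizeSTE.apply(w_real)                # binarized weights in forward
out = x @ w_bin.t()
out.sum().backward()                             # gradients reach w_real via STE
print(w_bin)
print(w_real.grad)
```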


Training Neural Networks with 3-bit Integer Weights

In this work we present neural network training algorithms based on the differential evolution (DE) strategies introduced by Storn and Price [Journal of Global Optimization, 11:341–359, 1997]. These strategies are applied to train neural networks with 3-bit integer weights. Integer-weight neural networks are better suited for hardware implementation than their real-weight analogues. ...
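As a rough illustration of the approach, the sketch below runs a generic DE/rand/1 loop over integer weight vectors, rounding and clipping each mutant into a 3-bit range. The fitness function, the signed range {-4, ..., 3}, and all hyperparameters are placeholder assumptions; the specific DE strategies used in the paper may differ.

```python
# Generic differential evolution over integer weight vectors; illustrative only.
import random

DIM, POP, F, CR, GENS = 8, 20, 0.7, 0.9, 200
LO, HI = -4, 3                       # a 3-bit signed integer range (assumption)

def clip_int(v):
    return max(LO, min(HI, int(round(v))))

def fitness(w):
    # Stand-in objective; in practice this is the network's training error.
    return sum((wi - 1) ** 2 for wi in w)

pop = [[random.randint(LO, HI) for _ in range(DIM)] for _ in range(POP)]
for _ in range(GENS):
    for i in range(POP):
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        trial = [
            clip_int(a[k] + F * (b[k] - c[k]))      # mutate, then snap to integers
            if random.random() < CR else pop[i][k]  # binomial crossover
            for k in range(DIM)
        ]
        if fitness(trial) <= fitness(pop[i]):       # greedy selection
            pop[i] = trial

best = min(pop, key=fitness)
print("best integer weights:", best, "fitness:", fitness(best))
```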


Gated XNOR Networks: Deep Neural Networks with Ternary Weights and Activations under a Unified Discretization Framework

Although deep neural networks (DNNs) are a revolutionary power opening up the AI era, their notoriously huge hardware overhead has challenged their applications. Recently, several binary and ternary networks, in which the costly multiply-accumulate operations can be replaced by accumulations or even binary logic operations, have made the on-chip training of DNNs quite promising. Therefore there i...
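To make the arithmetic savings concrete, here is a small NumPy sketch of ternary quantization: with weights and activations in {-1, 0, +1}, every multiplication degenerates into a sign flip or a skip, which is what enables XNOR/accumulate-style hardware. The threshold and the dense matmul are illustrative stand-ins, not the paper's unified discretization framework.

```python
# Illustrative ternary quantization; threshold 0.5 is an arbitrary choice.
import numpy as np

def ternarize(x, thresh=0.5):
    return np.where(x > thresh, 1, np.where(x < -thresh, -1, 0)).astype(np.int8)

def ternary_matmul(a_t, w_t):
    # With entries in {-1, 0, 1}, each product is a gated sign operation;
    # np.dot stands in for what hardware would do with logic ops and popcounts.
    return a_t.astype(np.int32) @ w_t.astype(np.int32)

rng = np.random.default_rng(0)
a = ternarize(rng.normal(size=(2, 6)))   # ternary activations
w = ternarize(rng.normal(size=(6, 4)))   # ternary weights
print(ternary_matmul(a, w))
```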


Neural Network Training with Constrained Integer Weights

In this contribution we present neural network training algorithms based on the differential evolution (DE) strategies introduced by Storn and Price [Journal of Global Optimization, 11:341–359, 1997]. These strategies are applied to train neural networks with small integer weights. Such neural networks are better suited for hardware implementation than real-weight ones. Furthermo...
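The DE sketch after the 3-bit paper above already covers the evolutionary search; as a complement, the snippet below isolates the constraint itself, projecting real-valued weights onto a small integer range as one would when targeting fixed-point hardware. The range [-3, 3] and the function name are illustrative assumptions.

```python
# Hypothetical helper: project real weights onto a constrained integer grid.
import numpy as np

def project_to_integer_range(w, lo=-3, hi=3):
    """Round to the nearest integer and clip into [lo, hi]."""
    return np.clip(np.rint(w), lo, hi).astype(np.int8)

w_real = np.array([[0.4, -2.7, 5.1], [-0.2, 1.6, -3.9]])
w_int = project_to_integer_range(w_real)
print(w_int)                   # all entries are small integers
print(w_int.nbytes, "bytes")   # int8 storage: one byte per weight
```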



Journal

Journal title: Lecture Notes in Networks and Systems

Year: 2022

ISSN: 2367-3370, 2367-3389

DOI: https://doi.org/10.1007/978-3-031-10464-0_30